Which in that case demonstrates awareness of the risk among AI researchers, while at the same time not demonstrating that Hutter finds it particularly likely that this would happen (‘might’) or that he agrees with any specific alarmist rhetoric. I can’t know if that’s what Carl was actually referring to. I do assure you that just about every AI researcher has seen The Terminator.
I gave the Hutter quote I was thinking of upthread.
My aim was basically to distinguish between buying Eliezer’s claims and taking intelligence explosion and AI risk seriously, and to reject the idea that the ideas in question came out of nowhere. One can think AI risk is worth investigating without thinking much of Eliezer’s views or SI.
I agree that the cited authors would assign much lower odds of catastrophe given human-level AI than Eliezer. The same statement would be true of myself, or of most people at SI and FHI: Eliezer is at the far right tail on those views. Likewise for the probability that a small team assembled in the near future could build safe AGI first, where catastrophe would otherwise have ensued.
Well, I guess that’s fair enough. In the quote at the top, though, I am specifically criticizing the extreme view. At the end of the day, the entire raison d’être of SI is the claim that without paying you the risk would be higher; the claim that you are somehow fairly unique. And there are many risks, for example the risk of a lethal flu-like pandemic, which are much more clearly understood and where specific efforts have a much more clearly predictable outcome of reducing the risk. Favouring one group of AI theorists over others does not have such a clearly predictable risk-reducing outcome.
(I am inclined to believe that pandemic risk is underfunded because a pandemic would primarily decimate the poorer countries, ending the existence of entire cultures, whereas ‘existential risk’ is a fancy phrase for a risk to the privileged.)
Which in that case demonstrates awareness of the risk among AI researchers, while at the same time not demonstrating that Hutter finds it particularly likely that this would happen (‘might’) or that he agrees with any specific alarmist rhetoric.
It need not demonstrate any such thing to fit Carl’s statement perfectly and give the lie to your claim that he was misrepresenting Hutter.
I do assure you that just about every AI researcher has seen The Terminator.
Sure, hence the Hutter citation of “(Cameron 1984)”. Oh wait.